Patterns, Rules, & Discoveries in Life and in Science

Author

  • David Klahr
Abstract

Representations of Reality: Surveys and Maps

The moral of this next story is that when we measure, record, and analyze something in the real world, we create knowledge, but that knowledge is inherently approximate and intentionally abstract. The abstraction process is elegant, and the approximation process is unavoidable. Moreover, participating in both processes can be deeply satisfying, as they were for me when I first experienced them. As in the first example, this resonance is something I did not realize when I first encountered it, but which, as I reflect upon the deeper forces that have kept me on the path of science, seems to have been very important. Here's the story.

When I was in high school and college, I worked after school, and for a couple of summers, as a surveyor's assistant. We did property surveys, ran lines for new roads and sewers, collected data for boundary disputes, surveyed the scenes of traffic accidents, and all the other sorts of things done by the survey crews that you see with their tripods, transits, and plumb bobs.

Figure 1. An array of watch parts, dutifully and creatively sorted.

A typical job might be one in which we were hired to make a property map of, for example, the lovely home on a pond depicted in Figure 2a. Imagine yourself in the setting. The grass is green, the birds are singing, the bees are buzzing, the air is hot, the ground is damp, and the pond has a slightly musty smell: a rich setting for the senses. We would arrive in our beat-up Willys Jeep, and I would take the transit out of its box, set it on the tripod, and get the steel measuring tape. My job was to schlep the equipment, hold the "rod" for the guy looking through the transit, cut through brush so as to create a line of sight for the surveyor and his transit, and to do other "grunt" work. But I watched what these guys did, and I was fascinated.

From the transit they would read, as accurately as possible from the vernier scale on the circumference of the transit base, the exact angle, to minutes and seconds of arc, of each turn of the transit to the next survey point. Then they would measure the distance from one point in the ground to another (a corner of the lot, an edge of the house or the driveway, etc.) with a 100 ft. steel measuring tape, as one of us held a plumb bob as closely as possible over a survey point in the ground, and we would pull the tape taut to a pre-specified load on a small spring tension measuring device, so that we knew, for example, that we had exactly ten pounds of horizontal force to control for the catenary sag in the tape. For each measurement, the survey chief would carefully pencil an entry into his battered field book. (No computers in those days!)

Only later did I realize that no matter how hard we tried, how careful we were, there was always some error: in reading the vernier scales on the transit, or in locating the plumb bob precisely over a survey point in the ground, or in measuring distances, even with a steel tape and the tension corrections. Of course, I knew that the stuff we did in science lab in school was always full of errors, but I thought of error as a kind of "mistake" rather than an inherent aspect of measurement.

I learned an important lesson early, one surely never explicitly articulated in my surveying days: that error is unavoidable in science. But there was a more important lesson, and it was more abstract, and, for me, more profound. For once we returned to the office, I would watch as the information from the field book was transformed into a map, with the help of straightedge and compass. The challenge was to start at a specific point on the paper and draw straight lines and intersecting angles at scale, such that they would correspond to the "real world" angular and linear measurements. The "holy grail" in this endeavor was to get the end point of the final line on the paper to land precisely on top of the start point of the first line. This "closing the survey" resulted in a lot of satisfaction and pride among the survey crew.
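That drafting-room check has a simple arithmetic core: each leg of a closed traverse is a vector, and if every angle and distance were measured perfectly, the vectors would sum to zero. The sketch below is mine, not the chapter's, and the bearings and distances are invented; it shows how the leftover "misclosure" makes the accumulated measurement error visible.

```python
import math

# A minimal sketch of "closing the survey": walk a closed traverse leg
# by leg and see how far the end point misses the start point. The
# bearings and distances are invented for illustration.
legs = [  # (bearing in degrees clockwise from north, distance in feet)
    (45.0, 100.0),
    (135.0, 100.2),  # a small taping error hides in this leg
    (225.0, 100.0),
    (315.0, 100.0),
]

north, east = 0.0, 0.0
for bearing_deg, dist_ft in legs:
    b = math.radians(bearing_deg)
    north += dist_ft * math.cos(b)  # northing component of this leg
    east += dist_ft * math.sin(b)   # easting component of this leg

# A perfect survey returns exactly to (0, 0); measurement error leaves
# a small "misclosure" vector instead.
misclosure_ft = math.hypot(north, east)
perimeter_ft = sum(d for _, d in legs)
print(f"misclosure: {misclosure_ft:.2f} ft over a {perimeter_ft:.1f} ft "
      f"perimeter (about 1 part in {perimeter_ft / misclosure_ft:,.0f})")
```

Crews of that era would then distribute any small misclosure back over the legs (for example, by the compass rule) so that the drawn map closed exactly.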
As I observed this process, I was fascinated by the way in which all of our efforts in the field, in the real world, with stumps and bumps, rocks and buildings, and briars and mud, would be transcribed from the field books into maps of the kind shown in Figure 2a, where the house and pond had been transformed into a symbolic abstraction. Much was lost, but what was essential for the purposes of the survey had been retained. Moreover, some knowledge existed in the abstractions that did not exist in the real world. The distances, angles, elevations, and contour lines all culminated in a succinct simplification that revealed new relations among the elements.

Figure 2. a. From the real world to the abstracted and quantified world of surveying. b. From the real world of children to a spreadsheet and ANOVA results.

Isn't that what we do as psychologists? We might be studying scientific reasoning, problem solving, language acquisition, or number concepts; but in all cases, we extract, from the richness of each individual case, only what is of interest to us, and we leave the rest behind. In the kind of work with which I am most familiar, the primary "yield" from many hours of data collection with many children is a spreadsheet with columns for the various conditions and measurements and rows for the children. That is, each child's response to our challenges becomes a row in a spreadsheet. That's all that's left. That's all we want to examine. We have retained what's essential for our purposes and discarded the rest: the children's voices, smiles, cute behaviors, funny but irrelevant comments, and so on. From 50 children to 50 rows in a spreadsheet.

And then we abstract again. We take the data, we pour it into our statistics package, and we aggregate and simplify even further in order to tell our story. We present effects, contrasts, and d' values. By selective simplification, we have created a new entity, a new kind of knowledge, that did not exist until we did those transformations. The point of this lesson is summarized in Figure 2b. That is, I see a direct analogy between the translation from the physical world to the surveyor's field notes to the final map on one hand, and the real children, our data sheets, and our extracted statistical models on the other.
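The double abstraction, children to rows and then rows to summary statistics, can be made concrete in a few lines. This sketch is purely illustrative: the condition names and scores are invented, and Cohen's d stands in for whatever effect measure a study would actually report.

```python
import statistics

# First abstraction: each child's rich session is reduced to one row.
# Condition labels and scores are invented for illustration.
rows = [("explicit", s) for s in (7, 9, 6, 8, 8, 7, 9, 6)] + \
       [("discovery", s) for s in (5, 6, 4, 7, 5, 6, 5, 4)]

# Second abstraction: the rows collapse to two means and one effect size.
groups = {}
for condition, score in rows:
    groups.setdefault(condition, []).append(score)

means = {c: statistics.mean(xs) for c, xs in groups.items()}
sds = {c: statistics.stdev(xs) for c, xs in groups.items()}

# Cohen's d with a simple pooled standard deviation (equal group sizes).
pooled_sd = ((sds["explicit"] ** 2 + sds["discovery"] ** 2) / 2) ** 0.5
d = (means["explicit"] - means["discovery"]) / pooled_sd

print({c: round(m, 2) for c, m in means.items()}, "d =", round(d, 2))
```

Sixteen rich hours with children become sixteen tuples, which become two means and one number; that final number is the "new entity" the text describes.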
Why do I find this so interesting, challenging, and satisfying? Well, my exercise in self-analysis is claiming that I was imprinted at a tender age on these aspects and features of surveying because the job came with two powerful affective components. First, it had high prestige amongst my nerdy friends (in other words, all of my high-school friends) because I had been chosen for the position, ahead of my classmates, on the basis of my physics teacher's recommendation as technically competent and reliable. Second, it had high status, even more broadly, because surveying was associated with "macho" construction jobs, with being outdoors, and with working under severe, and occasionally somewhat dangerous, conditions. So the affective aspect was tremendously fulfilling, and the intellectual part is strongly associated with what we do as researchers. That is, the process of doing research is just like what used to happen in my surveying days when we would go from the survey in the damp, muddy, buggy field to the map or blueprint based on the survey. I'm convinced that my early affective and cognitive experiences as a surveyor's assistant gave me a deeply embedded, although unarticulated, understanding of and attachment to both the elegance and limitations of the research process.

Knowledge Driven Search Trumps Trial and Error

In graduate school, I began to learn about formal models of problem solving and decision making, and about the profound difference in efficiency between "knowledge driven" search and "trial and error" search. I also discovered that I had already had a personal experience in which I had seen this contrast in action, a personal experience that, when I eventually encountered a formal description of it, really rang a bell.

My first job after graduating from college was as a computer programmer working for Wolf Research and Development Company, a very small (~10 employees) company in Boston that had several Air Force contracts involving computer programming. My first assignment was to work on what we then called "an adaptive program", but which today would look like some pretty simple machine learning work. That was the sort of thing that had attracted me to the job, because my senior thesis at MIT involved a primitive bit of artificial intelligence: writing a program that learned how to play the game "NIM" by watching an expert play it. (When you tire of reading this chapter, try this: http://www.archimedes-lab.org/game_nim/nim.html#) However, Wolf also did a lot of "bread and butter" work that mainly involved taking data in one format and converting it to another, for example, taking readings from a radar set based on azimuth, elevation, and distance from the radar site and converting them to latitude and longitude. The tasks were pretty straightforward conceptually; but in those days of millisecond machines with only two thousand words of memory, even these mundane tasks took a lot of ingenuity.
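To give a feel for that "bread and butter" conversion, here is a toy version of the geometry. It is my sketch, not Wolf's code: it assumes a spherical earth, ignores the target's altitude, and uses invented coordinates; the real programs were hand-written assembler and far more careful about geodesy.

```python
import math

EARTH_RADIUS_KM = 6371.0

def radar_to_lat_lon(site_lat, site_lon, azimuth_deg, elevation_deg, slant_km):
    """Toy azimuth/elevation/range -> latitude/longitude conversion.

    Spherical earth, target altitude ignored: the slant range is
    projected onto the ground and swept along a great circle from
    the radar site.
    """
    ground_km = slant_km * math.cos(math.radians(elevation_deg))
    d = ground_km / EARTH_RADIUS_KM  # angular distance in radians
    lat1 = math.radians(site_lat)
    lon1 = math.radians(site_lon)
    az = math.radians(azimuth_deg)
    # Standard destination-point formulas for a great-circle path.
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(az))
    lon2 = lon1 + math.atan2(math.sin(az) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# A return 300 km out, low on the horizon, north-east of an invented
# site at (64.3 N, 149.2 W):
print(radar_to_lat_lon(64.3, -149.2, 45.0, 2.0, 300.0))
```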
After I had been at Wolf for a year or so, the company landed a big contract with the North American Aerospace Defense Command (NORAD) in Colorado Springs, Colorado, and being young, single, and eager to travel, I jumped at the chance to move west to work on the project. What did NORAD do? Well, as anyone over 50 or so will recall, these were very serious and crazy times.

We were engaged in a "Cold War" with the USSR, and the fundamental military strategy was called Mutually Assured Destruction, or MAD. And mad it was. The basic idea was for each side to guarantee that if one side attacked, the other side would immediately counterattack. Each side knew that they could not intercept the other's nuclear-armed intercontinental ballistic missiles, but they also knew that they could launch enough missiles of their own to destroy the initial attacker, even as they were being destroyed. So nobody wins, and everybody loses.

NORAD played a key role in this astoundingly insane zeitgeist, because its job was to determine whether or not anything coming over the horizon was a missile. This decision might not seem to have been much of a challenge, because the United States had enormous radars, each approximately the size of a football field tipped on its side, sitting in Alaska, Turkey, and England, pointed toward Russia, scanning the horizon. However, there was a problem, because even in the early 60's a lot of objects, ranging from exploded rocket boosters to nuts and bolts, were coming over the horizon every hour, and they were all harmless. Even in the early sixties, there were many objects in near earth orbit. (Today there are an estimated 20,000 objects at least as large as an apple, and perhaps half a million smaller objects, in near earth orbit. In fact, Vanguard I, launched in 1958, is still in earth orbit. These objects pose an ever increasing danger to space missions.)

So the big radars peering over the horizon were seeing a lot of moving objects and sending the signals of their tracks to NORAD, at which point our computers would try to determine whether any of these were in a ballistic trajectory, indicating that the Russians had launched their missiles, or in an orbital trajectory, indicative of harmless pieces of metal circling the earth. These computations had to be completed quickly, because it only takes about 15 minutes for a nuclear-armed ICBM to get from launch to target. They also had to be done correctly, because a false negative meant the end of New York, or Washington, or our building in Colorado Springs! (When I worked at NORAD, it had not yet moved into the "hard site" hundreds of feet underground in Cheyenne Mountain. Our building was called a "soft site".) A false positive meant the end of civilization.

The basic computational problem was to match the "track" of the sighted object to either a ballistic or an orbital trajectory. For a single object, this would not have been much of a challenge, even with the existing computational power; but, as I noted above, there were many objects, and thus many tracks to compute ... long before the days of parallel computers. And of course, we did not have much computational power ... certainly not by today's standards. NORAD's state-of-the-art computer was the Philco 2000: 32K of memory, a 1M disk, and 22K multiplications per second. (For the non-technical reader, think of it this way: your cell phone has about two thousand times as much memory as the computer that was at the heart of the defense system of the "free world" in the 60s.)

The programming teams tried various clever ways to do this discrimination as efficiently as possible. Of course it was all in assembler code, so it was very labor intensive. And then someone had a brilliant idea. So brilliant, and so obvious, that it made a deep impression on me. Here's the idea: Instead of treating each observation as something totally unknown, make use of what you already know.
You know that object X is in orbit, and that means you can predict exactly where and when it should come over the horizon in about 90 minutes. So rather than treat each sighting as if you know nothing, once you know what you are looking at on the horizon now, just revise its orbital path a bit and predict where it should show up the next time around, and you have plenty of time to do it. If, when you look at the first few blips that you think are object X, those blips fit the prediction, then you are done with that guy, and all the data associated with that sighting, for another 90 minutes. You just have to make a slight revision to the known orbit. If it's not there, then it blew up or disintegrated on its last trip around. And that leaves you lots of computational power to focus on the remaining unexpected blips on the horizon. Simple and elegant: Knowledge trumps brute force computation. Theory-guided search is the way to go!!

I wish I'd thought of that, but I didn't. However, I never forgot the lesson. Always ask yourself, "what do I already know?", before starting a complicated search. Or, to put it in terms that Kevin Dunbar, Anne Fay, Chris Schunn and I used in our work on scientific reasoning (Klahr, Fay & Dunbar, 1993; Schunn & Klahr, 1992, 1996), your location in the hypothesis space should guide your search in the experiment space. Little did I know at the time that my experience of peering into "real" space would influence my research in "cognitive spaces". Even today, with all of the incredible computing power available to us, the big advances in computer science come from ingenious formulations of problems, rather than from brute force computation.
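The logic of that brilliant idea fits in a few lines. The sketch below is a deliberately toy reconstruction, mine and not NORAD's: the "horizon" is one number per blip, and the expensive ballistic-versus-orbital fit is a stub. It shows the core move of checking new blips against predictions for catalogued objects before spending any real computation.

```python
TOLERANCE = 0.5  # how close a blip must be to a predicted position

def expensive_trajectory_fit(blip):
    # Stand-in for the costly computation: fit the full track and
    # decide ballistic (attack) versus orbital (harmless junk).
    return "needs full ballistic/orbital fit"

def classify(blips, catalog):
    """catalog maps object name -> predicted reappearance position,
    computed ~90 minutes earlier from the object's known orbit."""
    results, unexplained = {}, []
    for blip in blips:
        match = next((name for name, predicted in catalog.items()
                      if abs(blip - predicted) < TOLERANCE), None)
        if match is not None:
            catalog[match] = blip  # slight revision to the known orbit
            results[blip] = f"known object: {match}, orbit revised"
        else:
            unexplained.append(blip)
    # Only the unexplained blips pay for the expensive fit.
    for blip in unexplained:
        results[blip] = expensive_trajectory_fit(blip)
    return results

catalog = {"Vanguard I": 12.3, "booster fragment": 47.9}
print(classify([12.1, 47.8, 88.0], catalog))
```

Everything already in the catalog costs a comparison and a nudge; only the one unexplained blip gets the full treatment, which is exactly why the trick freed so much computing power.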
Serendipity at Stanford

So much for introspections on early influences. But while I have been focusing on the ways in which specific aspects of my varied experiences have contributed to the attraction and satisfaction of my career as a scientist, I have yet to explain how, given my engineering and programming background, I became a particular kind of scientist: one with an interest in cognitive and developmental psychology. That requires one more personal anecdote, one that was truly transformative, and entirely serendipitous, for it redirected me from one kind of scientific career to another.

The first step on the path to that event was not particularly unusual, so I won't describe it in any detail. It took place in the fall of 1962, when I left the lovely town of Colorado Springs, nestled at the foot of Pikes Peak, and drove to smoky Pittsburgh in my hot little TR-3 sports car, to enter a Ph.D. program in Organizational Behavior in the Graduate School of Industrial Administration (GSIA) at Carnegie Tech (now called the Tepper School of Business at Carnegie Mellon University).

I had been attracted to that program because Herb Simon and Allen Newell were at Carnegie Tech, central players in what came to be called "the cognitive revolution" of the late 50's and early 60's, and thus GSIA seemed an ideal place to pursue my long-standing interest in doing intelligent things with computers. After a couple of years of courses in Organization Theory, Economics, and Management Decision Making in a Ph.D. program that Newell and Simon called "Systems and Communication Sciences" (the precursor to what became Carnegie Mellon's School of Computer Science), I had just begun to formulate my dissertation topic: using multidimensional scaling techniques (Kruskal, 1963) to characterize the decision making process of college admissions officers (Klahr, 1969b). But I was still doing background reading and not fully engaged in the work. (I was, however, sufficiently interested in multidimensional scaling to publish a paper on the topic that became one of my most widely cited, even though I never did another psychometric paper; Klahr, 1969a.) Along the way, I had learned how to program in one of the then-novel "list processing languages" called IPL-V, Carnegie Tech's competitor with MIT's LISP. Although IPL preceded LISP by a couple of years, LISP went on to completely dominate AI programming. Nevertheless, IPL was the language in which many of the landmark programs in AI (EPAM, the Logic Theorist, and the early chess programs) were created.

Thus, in the Spring of 1965, when I was about halfway through my graduate program in GSIA (now Tepper), I happened to be schmoozing with one of the GSIA faculty, Walter Reitman. Reitman was a true innovator who challenged the seriality of the Newell and Simon approach to cognition by proposing a radically different computational architecture that he called "Argus", inventing, in effect, connectionist computational concepts 20 years before the beginning of PDP modeling (Reitman, 1964, 1965); he was also the founding editor of the journal Cognitive Psychology. I asked him what his summer plans were, and he told me he was going to a six-week summer conference at Stanford. "Sounds nice," I said. "Want to come?", he asked. "I could use a teaching assistant on how to construct cognitive models in IPL-V." It didn't take a lot of thought.
